(and perhaps rsample)
Carolina Musso
Computational Statistics
Another way of looking at probability/uncertainty
Frequentists
Bayesians
Bayes/Laplace: 18th and 19th centuries
\[ P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)} \]
P(B|A): the likelihood
P(A): our uncertainty about the parameter (the prior)
P(B): the normalizing constant
Recent advances: gains in computational power.
\[ \pi(\theta \mid y) \propto \pi(y \mid \theta) \, \pi(\theta) \]
\[ \text{posterior} \propto \text{likelihood} \times \text{prior} \]
We can update our estimates as new data arrive and interpret the results more directly.
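As an illustration (not from the original slides), the conjugate Beta-Binomial case makes this update explicit: with a $\mathrm{Beta}(a, b)$ prior on $\theta$ and $k$ successes in $n$ Bernoulli trials,

\[ \pi(\theta) = \mathrm{Beta}(a, b), \quad y \sim \mathrm{Binomial}(n, \theta) \;\Longrightarrow\; \pi(\theta \mid y) = \mathrm{Beta}(a + k,\; b + n - k) \]

The posterior has the same form as the prior, with the data simply added to its parameters.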
There is no free lunch:
Specifying the prior is subjective.
Computing the posterior is sometimes quite hard.
Curse of dimensionality.
A fresh example: tossing a coin
Write a script that defines a Bayesian model
Sample from the posterior distribution and measure its parameters
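A minimal Stan program for this coin example might look like the following (a sketch; the names N and y, and the uniform Beta(1, 1) prior, are choices of this text, not taken from the slides):

```stan
data {
  int<lower=0> N;                  // number of tosses
  int<lower=0, upper=1> y[N];      // observed outcomes (1 = heads)
}
parameters {
  real<lower=0, upper=1> theta;    // probability of heads
}
model {
  theta ~ beta(1, 1);              // uniform prior on theta
  y ~ bernoulli(theta);            // likelihood
}
```

Sampling from this model yields draws of theta from the posterior, whose mean and quantiles summarize our updated belief about the coin.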
Stan, 2012 (currently v. 2.6)
Stanislaw Ulam, the Polish mathematician who developed the Monte Carlo method in the 1940s.
“Sampling Through Adaptive Neighborhoods”
Free, open-source software written in C++
Capable of:
Full Bayesian inference with MCMC sampling (NUTS, HMC)
Approximate Bayesian inference with variational inference (ADVI)
Penalized maximum likelihood estimation via optimization (L-BFGS)
covering most statistical applications.
Conditioning can be implemented automatically.
It uses (or used) the No-U-Turn Sampler (Hoffman & Gelman, 2014), which derives from Hamiltonian Monte Carlo (Neal, 2011), itself a generalization of Metropolis MCMC.
adaptive
performs multiple steps per iteration, moving toward the posterior more efficiently.
highly scalable
Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization
\[ \pi(y, \theta) = \pi(y \mid \theta) \, \pi(\theta) \]
Find the joint distribution
The modeling language specifies the elements in blocks:
data
parameters
model
A very thorough talk by Michael Betancourt
First block: the observed space (the data)
Second block: the parameters
Third block: the model
Declare the variables
Their specific values will be supplied by the algorithms that evaluate the Stan program.
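For instance, a linear regression with a coefficient vector beta and scale sigma declares its three blocks like this (a sketch; the names N, K, X and y are assumptions of this text):

```stan
data {
  int<lower=0> N;        // number of observations
  int<lower=0> K;        // number of predictors (incl. intercept column)
  matrix[N, K] X;        // design matrix
  vector[N] y;           // response
}
parameters {
  vector[K] beta;        // regression coefficients
  real<lower=0> sigma;   // residual standard deviation
}
model {
  y ~ normal(X * beta, sigma);   // likelihood; flat priors by default
}
```

Note that the data block only declares N, K, X and y; their values arrive later, from the interface that calls the compiled program.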
The code is translated to C++ and compiled.
Libraries turn it into an executable program that can evaluate the log posterior and its gradient.
R version >= 3.4.0, preferably >= 4.0.0
RStudio version >= 1.4 is recommended.
R must be configured to compile C++ code: Windows, Mac, Linux.
Call:
lm(formula = y ~ ., data = data.frame(X[, -1]))
Residuals:
Min 1Q Median 3Q Max
-6.3624 -1.2669 0.0346 1.3258 6.9920
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 4.99245 0.06516 76.624 <2e-16 ***
X....1. 0.15604 0.06500 2.401 0.0166 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 2.06 on 998 degrees of freedom
Multiple R-squared: 0.005741, Adjusted R-squared: 0.004745
F-statistic: 5.763 on 1 and 998 DF, p-value: 0.01655
The data must be passed as a list!
It will take a little while…
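A sketch of how the data are passed (the names stan_data and model.stan are illustrative; the point is that every quantity declared in the model's data block becomes a named element of the list, with matching names):

```r
library(rstan)

stan_data <- list(
  N = nrow(X),  # number of observations
  K = ncol(X),  # number of predictors (including the intercept column)
  X = X,        # design matrix
  y = y         # response vector
)

fit <- stan(file = "model.stan", data = stan_data,
            chains = 4, iter = 2000)
```

Running this produces a sampling log like the one below.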
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1:
Chain 1: Gradient evaluation took 7.7e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.77 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1:
Chain 1:
Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 1:
Chain 1: Elapsed Time: 0.089 seconds (Warm-up)
Chain 1: 0.086 seconds (Sampling)
Chain 1: 0.175 seconds (Total)
Chain 1:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2:
Chain 2: Gradient evaluation took 4.5e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.45 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2:
Chain 2:
Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 2:
Chain 2: Elapsed Time: 0.086 seconds (Warm-up)
Chain 2: 0.089 seconds (Sampling)
Chain 2: 0.175 seconds (Total)
Chain 2:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3:
Chain 3: Gradient evaluation took 4.6e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.46 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3:
Chain 3:
Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 3:
Chain 3: Elapsed Time: 0.087 seconds (Warm-up)
Chain 3: 0.092 seconds (Sampling)
Chain 3: 0.179 seconds (Total)
Chain 3:
SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4:
Chain 4: Gradient evaluation took 2.4e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.24 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4:
Chain 4:
Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
Chain 4:
Chain 4: Elapsed Time: 0.091 seconds (Warm-up)
Chain 4: 0.089 seconds (Sampling)
Chain 4: 0.18 seconds (Total)
Chain 4:
Inference for Stan model: anon_model.
4 chains, each with iter=2000; warmup=1000; thin=1;
post-warmup draws per chain=1000, total post-warmup draws=4000.
mean se_mean sd 2.5% 50% 97.5% n_eff Rhat
beta[1] 4.991 0.001 0.064 4.866 4.992 5.118 3651 0.999
beta[2] 0.218 0.001 0.063 0.091 0.218 0.336 3870 1.000
sigma 1.975 0.001 0.044 1.894 1.974 2.062 3880 1.001
Samples were drawn using NUTS(diag_e) at Sat Feb 11 22:44:02 2023.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at
convergence, Rhat=1).
stan(...,
     pars = NA, # to save only some of the parameters
     chains = 4, # number of parallel chains
     iter = 2000, # number of iterations
     warmup = floor(iter/2), # burn-in
     thin = 1, # thinning (interval at which samples are kept)
     init = "random", # initial values
     seed = sample.int(.Machine$integer.max, 1),
     algorithm = c("NUTS", "HMC", "Fixed_param"),
     ...)

An interface to the interface?
“The tidymodels framework is a collection of packages for modeling and machine learning using tidyverse principles”
library(tidymodels)
bayes_mod <-
linear_reg() %>%
set_engine("stan",
prior_intercept = beta_prior,
prior = beta_prior,
prior_aux = sigma_prior)
bayes_fit <-
bayes_mod %>%
fit(y ~ x, data = data.frame(X[, -1]))
print(bayes_fit, digits = 5)

parsnip model object
stan_glm
family: gaussian [identity]
formula: y ~ x
observations: 1000
predictors: 2
------
Median MAD_SD
(Intercept) 4.99110 0.06260
x 0.21982 0.05913
Auxiliary parameter(s):
Median MAD_SD
sigma 1.97422 0.04356
------
* For help interpreting the printed output see ?print.stanreg
* For info on the priors used see ?prior_summary.stanreg
A case study, Rbloggers, Another one, Stan for epidemiology
Become a Bayesian with R and Stan, Bayesian data analysis - RStan demos
R/Stan examples, Stan function references
Prior distributions for RStan models
StanCon materials, List of Case Studies - official
The rsample package provides functions for creating different types of resamples.
The resampled datasets are directly accessible in the resulting object, yet without consuming much additional memory.
2.64 MB
6.69 MB
133.74 kB
[1] 2.528489
<Analysis/Assess/Total>
<32/14/32>
mpg cyl disp hp drat wt qsec vs am gear carb
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
Toyota Corolla.1 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
AMC Javelin.1 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
Merc 450SLC.1 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
Porsche 914-2.1 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8
Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1
Valiant.1 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
Porsche 914-2.2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2
Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4
Dodge Challenger.1 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2
Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2
Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1
Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
Fiat 128.1 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6
Valiant.2 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
Toyota Corolla.2 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
Pontiac Firebird.1 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2
Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
Mazda RX4.1 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4
Duster 360.1 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4
AMC Javelin.2 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
mpg cyl disp hp drat wt qsec vs am gear carb
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1
Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
Toyota Corolla.1 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1
AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3
mpg cyl disp hp drat wt qsec vs am gear carb
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1
Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2
Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2
Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4
Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4
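Output like the above can be produced with a few calls (a sketch; the object names bt and first are illustrative):

```r
library(rsample)

set.seed(123)
# bootstrap resamples of mtcars; each split stores row indices,
# not a copy of the data, which keeps the object small
bt <- bootstraps(mtcars, times = 25)

first <- bt$splits[[1]]
analysis(first)    # the in-bag (bootstrap) sample, with repeated rows
assessment(first)  # the out-of-bag rows
```

The `analysis()` and `assessment()` accessors materialize the two datasets only when asked, which is why the rset object itself stays lightweight.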
nonlin_form <-
as.formula(
VO2 ~ (t <= 5.883) * VO2rest +
(t > 5.883) *
(VO2rest + (VO2peak - VO2rest) * (1 - exp(-(t - 5.883) / mu)))
)
# initial values chosen by visual inspection
start_vals <- list(VO2rest = 400, VO2peak = 1600, mu = 1)
res <- nls(nonlin_form, start = start_vals, data = O2K)
tidy(res)

# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 VO2rest 357. 11.4 31.3 4.27e-26
2 VO2peak 1631. 21.5 75.9 1.29e-38
3 mu 1.19 0.0766 15.5 1.08e-16
# function to fit the model to each resample
fit_fun <- function(split, ...) {
nls(nonlin_form, data = analysis(split), ...) %>%
tidy()
}
set.seed(462)
nlin_bt <-
bootstraps(O2K, times = 2000, apparent = TRUE) %>%
mutate(models = map(splits, ~ fit_fun(.x, start = start_vals)))
nlin_bt

# Bootstrap sampling with apparent sample
# A tibble: 2,001 × 3
splits id models
<list> <chr> <list>
1 <split [36/14]> Bootstrap0001 <tibble [3 × 5]>
2 <split [36/13]> Bootstrap0002 <tibble [3 × 5]>
3 <split [36/16]> Bootstrap0003 <tibble [3 × 5]>
4 <split [36/12]> Bootstrap0004 <tibble [3 × 5]>
5 <split [36/16]> Bootstrap0005 <tibble [3 × 5]>
6 <split [36/13]> Bootstrap0006 <tibble [3 × 5]>
7 <split [36/15]> Bootstrap0007 <tibble [3 × 5]>
8 <split [36/16]> Bootstrap0008 <tibble [3 × 5]>
9 <split [36/11]> Bootstrap0009 <tibble [3 × 5]>
10 <split [36/13]> Bootstrap0010 <tibble [3 × 5]>
# … with 1,991 more rows
# A tibble: 3 × 5
term estimate std.error statistic p.value
<chr> <dbl> <dbl> <dbl> <dbl>
1 VO2rest 359. 10.7 33.5 4.59e-27
2 VO2peak 1656. 31.1 53.3 1.39e-33
3 mu 1.23 0.113 10.9 2.01e-12
nls_coef <-
nlin_bt %>%
dplyr::select(-splits) %>%
# stack the per-resample coefficient tables
unnest(models) %>%
dplyr::select(id, term, estimate)
head(nls_coef)

# A tibble: 6 × 3
id term estimate
<chr> <chr> <dbl>
1 Bootstrap0001 VO2rest 359.
2 Bootstrap0001 VO2peak 1656.
3 Bootstrap0001 mu 1.23
4 Bootstrap0002 VO2rest 358.
5 Bootstrap0002 VO2peak 1662.
6 Bootstrap0002 mu 1.26
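Because the resamples were created with `apparent = TRUE`, rsample can turn these estimates into bootstrap percentile confidence intervals directly (a sketch; see `?int_pctl`):

```r
library(rsample)

# percentile confidence intervals for each coefficient, computed
# from the bootstrap distribution stored in the `models` column
int_pctl(nlin_bt, models)
```

This yields lower and upper percentile bounds for VO2rest, VO2peak and mu without any normality assumption.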
Just thinking that there is still an assignment to hand in…